Categories

Academia (6), Actors (6), Adversarial Training (7), Agency (6), Agent Foundations (18), AGI (19), AGI Fire Alarm (2), AI Boxing (2), AI Takeoff (8), AI Takeover (6), Alignment (6), Alignment Proposals (11), Alignment Targets (5), ARC (3), Autonomous Weapons (1), Awareness (5), Benefits (2), Brain-based AI (3), Brain-computer Interfaces (1), CAIS (2), Capabilities (20), Careers (16), Catastrophe (31), CHAI (1), CLR (1), Cognition (6), Cognitive Superpowers (9), Coherent Extrapolated Volition (3), Collaboration (5), Community (10), Comprehensive AI Services (1), Compute (8), Consciousness (5), Content (3), Contributing (32), Control Problem (8), Corrigibility (9), Deception (5), Deceptive Alignment (8), Decision Theory (5), DeepMind (4), Definitions (83), Difficulty of Alignment (10), Do What I Mean (2), ELK (3), Emotions (1), Ethics (7), Eutopia (5), Existential Risk (31), Failure Modes (17), FAR AI (1), Forecasting (7), Funding (10), Game Theory (1), Goal Misgeneralization (13), Goodhart's Law (3), Governance (24), Government (3), GPT (3), Hedonium (1), Human Level AI (6), Human Values (12), Inner Alignment (11), Instrumental Convergence (8), Intelligence (17), Intelligence Explosion (7), International (3), Interpretability (16), Inverse Reinforcement Learning (1), Language Models (11), Literature (5), Living document (2), Machine Learning (19), Maximizers (1), Mentorship (8), Mesa-optimization (6), MIRI (3), Misuse (4), Multipolar (4), Narrow AI (4), Objections (64), Open AI (2), Open Problem (6), Optimization (4), Organizations (16), Orthogonality Thesis (5), Other Concerns (8), Outcomes (3), Outer Alignment (15), Outreach (5), People (4), Philosophy (5), Pivotal Act (1), Plausibility (9), Power Seeking (5), Productivity (6), Prosaic Alignment (6), Quantilizers (2), Race Dynamics (5), Ray Kurzweil (1), Recursive Self-improvement (6), Regulation (3), Reinforcement Learning (13), Research Agendas (27), Research Assistants (1), Resources (22), Robots (8), S-risk (6), Sam Bowman (1), Scaling Laws (6), Selection Theorems (1), Singleton (3), Specification Gaming (11), Study (14), Superintelligence (38), Technological Unemployment (1), Technology (3), Timelines (14), Tool AI (2), Transformative AI (4), Transhumanism (2), Types of AI (3), Utility Functions (3), Value Learning (5), What About (9), Whole Brain Emulation (5), Why Not Just (16)

Existential Risk

31 pages tagged "Existential Risk"
Do people seriously worry about existential risk from AI?
Is the UN concerned about existential risk from AI?
If I only care about helping people alive today, does AI safety still matter?
How can progress in non-agentic LLMs lead to capable AI agents?
How might AGI kill people?
How and why should I form my own views about AI safety?
How can I convince others and present the arguments well?
How can I update my emotional state regarding the urgency of AI safety?
Does the importance of AI risk depend on caring about transhumanist utopias?
Why does AI takeoff speed matter?
How likely is extinction from superintelligent AI?
What is the "long reflection"?
What are the main sources of AI existential risk?
Could AI alignment research be bad? How?
Isn’t the real concern with AI something else?
What are some arguments why AI safety might be less important?
What are existential risks (x-risks)?
Are there any detailed example stories of what unaligned AGI would look like?
Will AI be able to think faster than humans?
What is perverse instantiation?
What is AI alignment?
Would a slowdown in AI capabilities development decrease existential risk?
What is reward hacking?
Why should someone who is religious worry about AI existential risk?
Why would a misaligned superintelligence kill everyone in the world?
Aren't AI existential risk concerns just an example of Pascal's mugging?
What is Vingean uncertainty?
What is the "sharp left turn"?
Might anyone use AI to destroy human civilization?
What concepts underlie existential risk from AI?
Predictions about future AI